Experiments on Learning by Back Propagation

Authors

  • David C. Plaut
  • Steven J. Nowlan
  • Geoffrey E. Hinton
Abstract

Rumelhart, Hinton and Williams [Rumelhart et al. 86] describe a learning procedure for layered networks of deterministic, neuron-like units. This paper describes further research on the learning procedure. We start by describing the units, the way they are connected, the learning procedure, and the extension to iterative nets. We then give an example in which a network learns a set of filters that enable it to discriminate formant-like patterns in the presence of noise. The speed of learning is strongly dependent on the shape of the surface formed by the error measure in "weight space." We give examples of the shape of the error surface for a typical task and illustrate how an acceleration method speeds up descent in weight space. The main drawback of the learning procedure is the way it scales as the size of the task and the network increases. We give some preliminary results on scaling and show how the magnitude of the optimal weight changes depends on the fan-in of the units. Additional results illustrate the effects on learning speed of the amount of interaction between the weights. A variation of the learning procedure that back-propagates desired state information rather than error gradients is developed and compared with the standard procedure. Finally, we discuss the relationship between our iterative networks and the "analog" networks described by Hopfield and Tank [Hopfield and Tank 85]. The learning procedure can discover appropriate weights in their kind of network, as well as determine an optimal schedule for varying the nonlinearity of the units during a search.
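The acceleration method the abstract refers to is a momentum term on the weight updates, dw(t) = -eps * dE/dw + alpha * dw(t-1), as defined by Rumelhart et al. Below is a minimal sketch of back propagation with such a momentum term on a small layered network of sigmoid units; the XOR task, the hyperparameters, and all variable names are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Back propagation with a momentum ("acceleration") term:
#   dw(t) = -eps * dE/dw + alpha * dw(t-1)
# One hidden layer of sigmoid units, squared-error loss, trained on XOR.
# All settings here are illustrative assumptions.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
T = np.array([[0], [1], [1], [0]], dtype=float)              # targets

W1 = rng.normal(0.0, 0.5, (2, 3)); b1 = np.zeros(3)  # input  -> hidden
W2 = rng.normal(0.0, 0.5, (3, 1)); b2 = np.zeros(1)  # hidden -> output

eps, alpha = 0.5, 0.9  # learning rate and momentum coefficient
vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)
vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)

for epoch in range(5000):
    h = sigmoid(X @ W1 + b1)            # forward pass
    y = sigmoid(h @ W2 + b2)
    dy = (y - T) * y * (1 - y)          # backward pass, E = 0.5*sum((y-T)^2)
    dh = (dy @ W2.T) * h * (1 - h)
    vW2 = -eps * (h.T @ dy) + alpha * vW2; W2 += vW2   # momentum updates
    vb2 = -eps * dy.sum(0) + alpha * vb2;  b2 += vb2
    vW1 = -eps * (X.T @ dh) + alpha * vW1; W1 += vW1
    vb1 = -eps * dh.sum(0) + alpha * vb1;  b1 += vb1

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))  # ~ [0, 1, 1, 0]
```

The momentum term accumulates gradient components that point consistently in the same direction and damps oscillation across the steep walls of ravines in the error surface, which is why it speeds descent in weight space.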


Similar Resources

Semi-Supervised Learning Based Prediction of Musculoskeletal Disorder Risk

This study explores a semi-supervised classification approach using random forest as a base classifier to classify the low-back disorder (LBD) risk associated with industrial jobs. A semi-supervised classification approach uses unlabeled data together with a small number of labelled data to create a better classifier. The results obtained by the proposed approach are compared with those o...
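As a concrete illustration of the scheme this snippet describes, here is a minimal self-training sketch with a random-forest base classifier; the confidence threshold, the loop structure, and all names are common conventions assumed for illustration, not details taken from the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def self_train(X_lab, y_lab, X_unlab, threshold=0.9, rounds=5):
    """Self-training sketch: fit on the labelled set, pseudo-label the
    unlabelled pool where the forest is confident, absorb those points,
    and refit. Threshold and round count are illustrative assumptions."""
    X, y = X_lab.copy(), y_lab.copy()
    pool = X_unlab.copy()
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    for _ in range(rounds):
        clf.fit(X, y)
        if len(pool) == 0:
            break
        proba = clf.predict_proba(pool)
        keep = proba.max(axis=1) >= threshold   # confident pseudo-labels only
        if not keep.any():
            break
        pseudo = clf.classes_[proba[keep].argmax(axis=1)]
        X = np.vstack([X, pool[keep]])
        y = np.concatenate([y, pseudo])
        pool = pool[~keep]
    return clf
```

Each round can only absorb points the current forest already labels with high confidence, so the threshold controls how aggressively the unlabelled data is pulled into the training set.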


Symbolic Algorithm / Neural Network

Neural networks offer an intriguing set of techniques for learning based on the adjustment of weights of connections between processing units. However, the power and limitations of connectionist methods for learning, such as the method of back-propagation in parallel distributed processing networks, are not yet entirely clear. We report on a set of experiments that more precisely ...


Back Propagation is Sensitive to Initial Conditions

This paper explores the effect of initial weight selection on feed-forward networks learning simple functions with the back-propagation technique. We first demonstrate, through the use of Monte Carlo techniques, that the magnitude of the initial condition vector (in weight space) is a very significant parameter in convergence time variability. In order to further understand this result, additio...
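A minimal sketch of the kind of Monte Carlo experiment described: train the same small network from many random starts of a given magnitude in weight space and record the epochs needed to converge. The XOR task, the convergence criterion, and all hyperparameters are assumptions for illustration, not the paper's setup.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

def epochs_to_converge(scale, rng, lr=0.5, max_epochs=20000, tol=0.1):
    # One run: initial weights drawn uniformly from [-scale, scale].
    W1 = rng.uniform(-scale, scale, (2, 3)); b1 = np.zeros(3)
    W2 = rng.uniform(-scale, scale, (3, 1)); b2 = np.zeros(1)
    for epoch in range(max_epochs):
        h = sigmoid(X @ W1 + b1)
        y = sigmoid(h @ W2 + b2)
        if np.abs(y - T).max() < tol:    # reached the error criterion
            return epoch
        dy = (y - T) * y * (1 - y)
        dh = (dy @ W2.T) * h * (1 - h)
        W2 -= lr * (h.T @ dy); b2 -= lr * dy.sum(0)
        W1 -= lr * (X.T @ dh); b1 -= lr * dh.sum(0)
    return max_epochs                    # counted as "did not converge"

rng = np.random.default_rng(0)
for scale in (0.1, 0.5, 1.0, 2.0, 4.0):
    runs = [epochs_to_converge(scale, rng) for _ in range(20)]
    print(f"|w0| <= {scale}: median epochs = {int(np.median(runs))}")
```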


Learning Generative ConvNet with Continuous Latent Factors by Alternating Back-Propagation

The supervised learning of the discriminative convolutional neural network (ConvNet or CNN) is powered by back-propagation on the parameters. In this paper, we show that the unsupervised learning of a popular top-down generative ConvNet model with continuous latent factors can be accomplished by a learning algorithm that consists of alternately performing back-propagation on both the latent f...
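The alternation can be sketched on a toy linear generator in place of the paper's ConvNet: each iteration back-propagates the reconstruction error first onto the per-example latent factors (the inferential step), then onto the shared parameters (the learning step). Everything below, including the linear model, sizes, and step lengths, is an illustrative assumption.

```python
import numpy as np

# Toy alternating back-propagation: generator x ~ W @ z. Gradient steps on
# the latent vectors Z and the parameters W alternate; a linear generator
# stands in for the paper's ConvNet, and all settings are assumptions.

rng = np.random.default_rng(0)
d, k, n = 5, 2, 200                         # data dim, latent dim, examples
X = rng.normal(size=(n, k)) @ rng.normal(size=(d, k)).T \
    + 0.01 * rng.normal(size=(n, d))        # synthetic data from a true model

W = rng.normal(scale=0.1, size=(d, k))      # generator parameters
Z = np.zeros((n, k))                        # per-example latent factors

for it in range(2000):
    R = Z @ W.T - X                         # reconstruction residual
    Z -= 0.1 * (R @ W)                      # inferential back-propagation
    R = Z @ W.T - X
    W -= (0.5 / n) * (R.T @ Z)              # learning back-propagation

print("mean squared reconstruction error:", np.mean((Z @ W.T - X) ** 2))
```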


An Adaptive Training Method of Back-Propagation Algorithm

Back-propagation is currently the most widely applied neural network algorithm, but its slow learning speed and local minima problem are often cited as its major weaknesses. This paper describes an adaptive training algorithm based on selective retraining of patterns through error analysis, and on dynamic adaptation of the learning rate and momentum through ...
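The adaptation rule itself is truncated in this snippet, so the sketch below substitutes a common heuristic of the same kind (sometimes called "bold driver"): grow the learning rate while the epoch error keeps falling, and shrink it and drop the momentum when the error rises. The function, its parameters, and the helper named in the usage comment are assumptions, not the paper's method.

```python
# "Bold driver" style adaptation of learning rate and momentum, assumed
# here as a stand-in because the abstract's own rule is truncated.

def adapt(lr, err, prev_err, grow=1.05, shrink=0.5, base_momentum=0.9):
    if err < prev_err:
        return lr * grow, base_momentum   # progress: accelerate
    return lr * shrink, 0.0               # overshoot: back off, lose inertia

# Inside a training loop (sketch):
#   err = mean_epoch_error(...)           # hypothetical helper
#   lr, momentum = adapt(lr, err, prev_err)
#   prev_err = err
```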


Adaptive Back-Propagation in On-Line Learning of Multilayer Networks

An adaptive back-propagation algorithm is studied and compared with gradient descent (standard back-propagation) for on-line learning in two-layer neural networks with an arbitrary number of hidden units. Within a statistical mechanics framework, both numerical studies and a rigorous analysis show that the adaptive back-propagation method results in faster training by breaking the symmetry bet...



Publication date: 1986